# High-performance Text Generation
## MT1 Gen11 Gemma 2 9B
A pre-trained language model merged with mergekit using the DARE TIES method, based on multiple Gemma-2-9B variant models.
Tags: Large Language Model, Transformers

Author: zelk12
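
Once a DARE TIES merge is published, it loads like any other checkpoint. Below is a minimal sketch using Hugging Face `transformers`; the repository id `zelk12/MT1-Gen11-gemma-2-9B` is an assumption inferred from the author and model name above, not confirmed by this listing.

```python
# Minimal sketch: loading a merged Gemma-2-9B variant with transformers.
# The repo id below is an assumption; substitute the actual repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zelk12/MT1-Gen11-gemma-2-9B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Gemma 2 weights are distributed in bfloat16
    device_map="auto",           # requires the `accelerate` package
)

prompt = "Summarize the DARE TIES merge method in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```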
## Beyonder 4x7B V2
Beyonder-4x7B-v2 is a large language model based on the Mixture of Experts (MoE) architecture. It consists of four expert modules, each specializing in a different domain: dialogue, programming, creative writing, and mathematical reasoning.
Tags: Large Language Model, Transformers

Author: mlabonne
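
To make the MoE description concrete, here is a minimal sketch of Mixtral-style top-k routing over four experts in plain PyTorch. The layer sizes, top-2 routing, and class name are illustrative assumptions, not Beyonder's actual implementation.

```python
# Sketch of top-k expert routing as used in Mixtral-style MoE layers:
# a router scores all experts per token, and only the top-k experts run.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, hidden: int, n_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, 4 * hidden), nn.GELU(),
                          nn.Linear(4 * hidden, hidden))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, hidden). Score every expert for every token.
        logits = self.router(x)
        weights, idx = torch.topk(logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # renormalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e  # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(8, 64)       # 8 tokens, hidden size 64
print(TopKMoE(64)(x).shape)  # torch.Size([8, 64])
```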